Supplementary Material for Adversarial Robustness of Supervised Sparse Coding

A. Encoder Gap for k-sparse Signals

Herein we show that a positive encoder gap exists for signals that are (approximately) k-sparse.
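For reference, the standard notion of approximate sparsity underlying this section can be stated as follows (the notation here is illustrative and may differ slightly from the paper's):

```latex
% A signal x \in \mathbb{R}^n is (approximately) k-sparse if it is well
% approximated by its best k-term approximation x_k, obtained by keeping
% the k largest-magnitude entries of x and zeroing out the rest:
\| x - x_k \|_2 \le \epsilon .
% Exact k-sparsity corresponds to \epsilon = 0, i.e. \|x\|_0 \le k.
```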

Neural Information Processing Systems

Herein we prove our generalization bound, which we first re-state for completeness. We begin by bounding the first of the resulting terms, and then focus on the second and third terms of the corresponding equation; likewise, the loss is Lipschitz continuous with respect to w. In this section we also prove the key result in Lemma 4.2, guaranteeing that the perturbation in the encoded representation is controlled. With this result at hand, the proof of Lemma 4.2 follows directly from Remark B.2. For the setting above, the proof mimics that in [Mehta and Gray, 2013, Lemmas 10-11], though accommodating for the adversarial perturbation.
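The stability property behind Lemma 4.2 can be checked numerically. The sketch below (not the paper's proof; the dictionary, penalty `lam`, and step size are arbitrary toy choices) encodes a signal and an ℓ2-perturbed copy with an ISTA-based Lasso encoder and verifies that the codes stay close:

```python
# Illustrative sketch: a sparsity-promoting (Lasso) encoder is stable to
# small l2-bounded input perturbations, in the spirit of Lemma 4.2.
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_encoder(D, x, lam, n_iter=500):
    """Encode x via ISTA applied to min_z 0.5*||x - Dz||^2 + lam*||z||_1."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1/L with L = ||D||_2^2
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(z - step * D.T @ (D @ z - x), step * lam)
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 40))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
x = D[:, :3] @ np.array([1.0, -0.8, 0.5])   # a 3-sparse signal in span(D)

delta = rng.standard_normal(20)
delta *= 0.01 / np.linalg.norm(delta)       # perturbation with ||delta||_2 = 0.01

z_clean = lasso_encoder(D, x, lam=0.1)
z_pert = lasso_encoder(D, x + delta, lam=0.1)
print(np.linalg.norm(z_pert - z_clean))     # small: the code moves only slightly
```

The printed distance is on the order of the input perturbation, which is the qualitative behavior the lemma quantifies.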






We thank the reviewers for their encouraging and instructive comments, and the AC for guiding the review process. Our analysis follows Mehta and Gray (2013) and may look a bit too complicated; we will add a remark in line with our comment above. Note that the assumption on the encoder gap is very mild. R2: It is not clear that sparsity-promoting encoders are the right models to study. Ours is the first work to address this.


Review for NeurIPS paper: Adversarial Robustness of Supervised Sparse Coding


Additional Feedback: Overall, the work achieves new and interesting theoretical results for the model being studied. My main worry is the lack of experimental results on the encoder gap for datasets beyond MNIST, especially given that the size/existence of the encoder gap is crucial to the theoretical results and is an assumption made in the theoretical claims. Thus, I would highly recommend at least evaluating the encoder gap for other (more complex than MNIST) datasets. Many techniques that work well on MNIST may not work on other datasets due to MNIST's relative simplicity. For example, a network that binarizes pixel values (converts everything below 0.5 to 0, everything above to 1) and then classifies the result is quite adversarially robust, but the same technique will not work for more complex datasets.
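The binarization defense the review mentions is easy to illustrate. In this toy sketch (thresholds and values are illustrative), any perturbation smaller than a pixel's distance to the 0.5 threshold is erased before classification:

```python
# Toy illustration of the binarization preprocessing described in the review:
# pixels below 0.5 map to 0 and the rest to 1.
import numpy as np

def binarize(x, threshold=0.5):
    """Map pixel intensities in [0, 1] to {0, 1}."""
    return (x >= threshold).astype(np.float32)

x = np.array([0.05, 0.9, 0.3, 0.7])      # "clean" pixel intensities
noise = np.array([0.1, -0.1, 0.1, 0.1])  # small adversarial perturbation

print(binarize(x))          # [0. 1. 0. 1.]
print(binarize(x + noise))  # unchanged: no pixel crosses the 0.5 threshold
```

This works on MNIST precisely because its pixels are nearly binary to begin with; on natural-image datasets the thresholding destroys information, which is the reviewer's point.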


Adversarial Robustness of Supervised Sparse Coding

Sulam, Jeremias; Muthukumar, Ramchandran; Arora, Raman

arXiv.org Machine Learning

Several recent results provide theoretical insights into the phenomenon of adversarial examples. Existing results, however, are often limited due to a gap between the simplicity of the models studied and the complexity of those deployed in practice. In this work, we strike a better balance by considering a model that involves learning a representation while at the same time giving a precise generalization bound and a robustness certificate. We focus on the hypothesis class obtained by coupling a sparsity-promoting encoder with a linear classifier, and show an interesting interplay between the expressivity and stability of the (supervised) representation map and a notion of margin in the feature space. We bound the robust risk (to $\ell_2$-bounded perturbations) of hypotheses parameterized by dictionaries that achieve a mild encoder gap on training data. Furthermore, we provide a robustness certificate for end-to-end classification. We demonstrate the applicability of our analysis by computing certified accuracy on real data, and compare with other alternatives for certified robustness.
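Structurally, the hypothesis class the abstract describes can be sketched as a sparse code followed by a linear read-out. The snippet below uses a single soft-thresholding step as a stand-in encoder; the paper's encoder and certificate are more involved, so this only illustrates the architecture, with all dimensions and parameters chosen arbitrarily:

```python
# Minimal sketch of the hypothesis class: sparsity-promoting encoder + linear
# classifier, f(x) = sign(w^T z(x)) with z(x) a sparse code of x.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def predict(D, w, x, lam=0.1):
    """Classify x by the sign of w^T z, where z is a sparse code of x."""
    z = soft_threshold(D.T @ x, lam)   # one-step sparsity-promoting encoder
    return np.sign(w @ z), z

rng = np.random.default_rng(1)
D = rng.standard_normal((10, 25))
D /= np.linalg.norm(D, axis=0)         # unit-norm dictionary atoms
w = rng.standard_normal(25)            # linear classifier over the code
x = rng.standard_normal(10)

label, code = predict(D, w, x)
print(label, np.count_nonzero(code), "of", code.size, "coefficients active")
```

The interplay the abstract highlights lives in this pipeline: the sparser and more stable the code `z(x)`, the more of the classifier's margin survives an $\ell_2$-bounded perturbation of `x`.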